
    Proving Correctness and Completeness of Normal Programs - a Declarative Approach

    We advocate a declarative approach to proving properties of logic programs. Total correctness can be separated into correctness, completeness and clean termination; the latter includes non-floundering. Only clean termination depends on the operational semantics, in particular on the selection rule. We show how to deal with correctness and completeness in a declarative way, treating programs only from the logical point of view. Specifications used in this approach are interpretations (or theories). We point out that specifications for correctness may differ from those for completeness, as usually there are answers which are neither considered erroneous nor required to be computed. We present proof methods for correctness and completeness for definite programs and generalize them to normal programs. For normal programs we use the 3-valued completion semantics; this is a standard semantics corresponding to negation as finite failure. The proof methods employ solely the classical 2-valued logic. We use a 2-valued characterization of the 3-valued completion semantics which may be of separate interest. The presented methods are compared with an approach based on operational semantics. We also employ the ideas of this work to generalize a known method of proving termination of normal programs. Comment: To appear in Theory and Practice of Logic Programming (TPLP). 44 pages.
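The separation of correctness (every computed answer is allowed by the specification) from completeness (every required answer is actually computed) can be made concrete. Below is a minimal Python sketch, not the paper's proof method: it computes the least Herbrand model of a tiny ground definite program by iterating the immediate-consequence operator T_P, then checks it against two different specifications. The program and both specifications are invented for illustration.

```python
def least_model(rules, facts):
    """Iterate T_P to its fixpoint. rules: list of (head, [body atoms])."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if all(b in model for b in body) and head not in model:
                model.add(head)
                changed = True
    return model

# A tiny ground program: edge facts plus transitive-closure instances.
facts = {("edge", 1, 2), ("edge", 2, 3)}
rules = [
    (("path", 1, 2), [("edge", 1, 2)]),
    (("path", 2, 3), [("edge", 2, 3)]),
    (("path", 1, 3), [("path", 1, 2), ("edge", 2, 3)]),
]

m = least_model(rules, facts)

# Correctness spec: every computed answer must lie in this (larger) set.
nodes = (1, 2, 3)
spec_correct = set(facts) | {("path", a, b) for a in nodes for b in nodes if a <= b}
# Completeness spec: these answers must all be computed.
spec_complete = {("path", 1, 3)}

assert m <= spec_correct        # correct w.r.t. the permissive specification
assert spec_complete <= m       # complete w.r.t. the demanding specification
```

Note that the correctness specification allows answers (e.g. path(1, 1)) that are neither computed nor required, mirroring the abstract's point that the two specifications typically differ.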

    Using parametric set constraints for locating errors in CLP programs

    This paper introduces a framework of parametric descriptive directional types for constraint logic programming (CLP). It proposes a method for locating type errors in CLP programs and presents a prototype debugging tool. The main technique used is checking correctness of programs w.r.t. type specifications. The approach is based on a generalization of known methods for proving correctness of logic programs to the case of parametric specifications. Set-constraint techniques are used for formulating and checking verification conditions for (parametric) polymorphic type specifications. The specifications are expressed in a parametric extension of the formalism of term grammars. The soundness of the method is proved and the prototype debugging tool supporting the proposed approach is illustrated on examples. The paper is a substantial extension of the previous work by the same authors concerning monomorphic directional types. Comment: 64 pages. To appear in Theory and Practice of Logic Programming.
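As a loose illustration of what a directional (call/success) type expresses, here is a hypothetical Python analogue, not the paper's set-constraint formulation: each argument position gets a call type and a success type, and a wrapper checks a call against both. The append relation and all type names below are invented.

```python
# Hypothetical rendering of a directional type (illustration only):
#   append(list(int), list(int), any) -> append(list(int), list(int), list(int))

def is_list_of(elem_ok):
    """Type of lists whose elements all satisfy elem_ok."""
    return lambda x: isinstance(x, list) and all(elem_ok(e) for e in x)

def check_directional(call_types, success_types, relation, *args):
    """Check the call against call_types, run it, check success_types."""
    assert all(t(a) for t, a in zip(call_types, args)), "ill-typed call"
    answer = relation(*args)
    assert all(t(a) for t, a in zip(success_types, answer)), "ill-typed success"
    return answer

int_list = is_list_of(lambda x: isinstance(x, int))
any_type = lambda x: True

# 'append/3' modelled as a function returning the full answer tuple;
# the third argument is unbound (None) at call time.
def append3(xs, ys, zs):
    return (xs, ys, xs + ys)

out = check_directional(
    (int_list, int_list, any_type),     # call: first two arguments are int lists
    (int_list, int_list, int_list),     # success: all three are int lists
    append3, [1, 2], [3], None,
)
assert out == ([1, 2], [3], [1, 2, 3])
```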

    Hybrid Rules with Well-Founded Semantics

    A general framework is proposed for the integration of rules and external first-order theories. It is based on the well-founded semantics of normal logic programs and inspired by ideas of Constraint Logic Programming (CLP) and constructive negation for logic programs. Hybrid rules are normal clauses extended with constraints in their bodies; constraints are certain formulae in the language of the external theory. A hybrid program pairs a set of hybrid rules with an external theory. Instances of the framework are obtained by specifying the class of external theories and the class of constraints. An example instance is the integration of (non-disjunctive) Datalog with ontologies formalized as description logics. The paper defines a declarative semantics of hybrid programs and a goal-driven formal operational semantics. The latter can be seen as a generalization of SLS-resolution. It provides a basis for hybrid implementations combining Prolog with constraint solvers. Soundness of the operational semantics is proven. Sufficient conditions for decidability of the declarative semantics, and for completeness of the operational semantics, are given.
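A toy rendering of the hybrid-rule idea, with an invented representation rather than the paper's formal semantics: each rule body mixes ordinary atoms, resolved against the program, with constraints delegated to an external theory. Here a plain Python function stands in for the external solver or description-logic reasoner.

```python
def external_theory(constraint):
    """Stand-in for an external solver: decides constraint formulae."""
    name, *args = constraint
    if name == "lt":
        return args[0] < args[1]
    raise ValueError(f"unknown constraint {name}")

def derive(facts, hybrid_rules):
    """Naive forward chaining over ground hybrid rules (illustration only)."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, atoms, constraints in hybrid_rules:
            if (all(a in model for a in atoms)
                    and all(external_theory(c) for c in constraints)
                    and head not in model):
                model.add(head)
                changed = True
    return model

facts = {("age", "ann", 16), ("age", "bob", 30)}
rules = [  # minor(X) :- age(X, N), N < 18.   (the comparison is decided externally)
    (("minor", "ann"), [("age", "ann", 16)], [("lt", 16, 18)]),
    (("minor", "bob"), [("age", "bob", 30)], [("lt", 30, 18)]),
]

m = derive(facts, rules)
assert ("minor", "ann") in m and ("minor", "bob") not in m
```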

    Using global analysis, partial specifications, and an extensible assertion language for program validation and debugging

    We discuss a framework for the application of abstract interpretation as an aid during program development, rather than in the more traditional application of program optimization. Program validation and detection of errors are first performed statically by comparing (partial) specifications written in terms of assertions against information obtained from (global) static analysis of the program. The results of this process are expressed in the user assertion language. Assertions (or parts of assertions) which cannot be checked statically are translated into run-time tests. The framework makes the use of assertions optional. It also allows using very general properties in assertions, beyond the predefined set understandable by the static analyzer and including properties defined by user programs. We also report briefly on an implementation of the framework. The resulting tool generates and checks assertions for Prolog, CLP(R), and CHIP/CLP(fd) programs, and integrates compile-time and run-time checking in a uniform way. The tool allows using properties such as types, modes, non-failure, determinacy, and computational cost, and can treat modules separately, performing incremental analysis.
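The compile-time/run-time split can be sketched schematically. The Python fragment below is an invented analogue, not the actual implementation: an assertion already discharged by static analysis is dropped, and anything the analyzer could not prove becomes a residual run-time test.

```python
def assert_prop(prop, statically_known=frozenset()):
    """Decorator: skip the check if analysis already proved `prop`."""
    def wrap(fn):
        if prop.__name__ in statically_known:
            return fn                      # discharged at "compile time"
        def checked(*args, **kw):          # residual run-time test
            result = fn(*args, **kw)
            assert prop(result), f"assertion {prop.__name__} failed"
            return result
        return checked
    return wrap

def non_negative(x):
    return x >= 0

# Suppose static analysis proved nothing about 'dist', so the
# assertion is compiled into a run-time check.
@assert_prop(non_negative)
def dist(a, b):
    return abs(a - b)

assert dist(3, 10) == 7
```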

    Optimizing the computation of overriding

    We introduce optimization techniques for reasoning in DLN, a recently introduced family of nonmonotonic description logics whose characterizing features appear well suited to modelling the applicative examples that naturally arise in biomedical domains and Semantic Web access control policies. These optimizations are validated experimentally on large KBs with more than 30K axioms. Speedups exceed one order of magnitude. For the first time, response times compatible with real-time reasoning are obtained with nonmonotonic KBs of this size.

    An assertion language for constraint logic programs

    In an advanced program development environment, such as that discussed in the introduction of this book, several tools may coexist which handle both the program and information on the program in different ways. Also, these tools may interact among themselves and with the user. Thus, the different tools and the user need some way to communicate. It is our design principle that such communication be performed in terms of assertions. Assertions are syntactic objects which allow expressing properties of programs. Several assertion languages have been used in the past in different contexts, mainly related to program debugging. In this chapter we propose a general language of assertions which is used in different tools for validation and debugging of constraint logic programs in the context of the DiSCiPl project. The assertion language proposed is parametric w.r.t. the particular constraint domain and the properties of interest used in each different tool. The language is quite general in that it poses few restrictions on the kind of properties which may be expressed. We believe the assertion language we propose is of practical relevance and appropriate for the different uses required in the tools considered.

    Limits on long-time-scale radio transients at 150 MHz using the TGSS ADR1 and LoTSS DR2 catalogues

    We present a search for transient radio sources on timescales of 2 to 9 yr at 150 MHz. This search is conducted by comparing the first Alternative Data Release of the TIFR GMRT Sky Survey (TGSS ADR1) and the second data release of the LOFAR Two-metre Sky Survey (LoTSS DR2). The overlapping survey area covers 5570 deg^2 on the sky, or 14 per cent of the total sky. We introduce a method to compare the source catalogues that involves a pair match of sources, a flux density cutoff to meet the survey completeness limit, and a newly developed compactness criterion. This method is used to identify both transient candidates in the TGSS source catalogue that have no counterpart in the LoTSS catalogue and transient candidates in LoTSS without a counterpart in TGSS. We find that imaging artefacts, as well as uncertainties and variations in the flux density scales, complicate the transient search. Our method to search for transients by comparing two different surveys, while taking into account imaging artefacts around bright sources and misaligned flux scales between surveys, is universally applicable to future radio transient searches. No transient sources were identified, but we are able to place an upper limit on the transient surface density of < 5.4 × 10^-4 deg^-2 at 150 MHz for compact sources with an integrated flux density over 100 mJy. Here we define a transient as a compact source with a flux density greater than 100 mJy that appears in the catalogue of one survey without a counterpart in the other survey. Comment: 14 pages, 11 figures.
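The quoted surface-density limit is consistent with a zero-detection Poisson upper limit; that this is how the number was derived is an assumption on my part. At 95 per cent confidence, zero detections exclude an expectation above roughly three events, and dividing by the 5570 deg^2 search area reproduces the quoted value:

```python
import math

# 95% CL Poisson upper limit on the expected count, given zero detections.
n_upper = -math.log(1 - 0.95)          # about 3.0 events
area_deg2 = 5570.0                     # overlapping TGSS ADR1 / LoTSS DR2 area

surface_density_limit = n_upper / area_deg2
print(f"{surface_density_limit:.1e} deg^-2")   # prints "5.4e-04 deg^-2"
```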

    Faraday tomography of LoTSS-DR2 data: I. Faraday moments in the high-latitude outer Galaxy and revealing Loop III in polarisation

    Observations of synchrotron emission at low radio frequencies reveal a labyrinth of polarised Galactic structures. However, the explanation for the wealth of structures remains uncertain due to the complex interactions between the interstellar medium and the magnetic field. A multi-tracer approach to the analysis of large sky areas is needed. This paper aims to use polarimetric images from the LOFAR Two-metre Sky Survey (LoTSS) to produce the largest mosaic of polarised emission in the northern sky at low radio frequencies (150 MHz) to date. The large area this mosaic covers allows for detailed morphological and statistical studies of polarised structures in the high-latitude outer Galaxy, including the well-known Loop III region. We produced a 3100 square degree Faraday tomographic cube using a rotation measure synthesis tool. We calculated the statistical moments of the Faraday spectra and compared them with data sets at higher frequencies (1.4 GHz) and with a map of rotation measures derived from extragalactic sources. The mosaic is dominated by polarised emission connected to Loop III. Additionally, the mosaic reveals an abundance of other morphological structures, mainly narrow and extended depolarisation canals, which are found to be ubiquitous. We find a correlation between the extragalactic rotation measure map and the LoTSS first Faraday moment image. The ratio of the two deviates from a simple model of a Burn slab (Burn 1966) along the line of sight, which highlights the high level of complexity in the magnetoionic medium that can be studied at these frequencies. Comment: 20 pages, 25 figures; accepted for publication in A&A.
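The "Faraday moments" can be read as intensity-weighted statistics of the Faraday spectrum over Faraday depth. That reading, and the toy Gaussian spectrum below, are my own illustration rather than the paper's pipeline:

```python
import numpy as np

phi = np.linspace(-100, 100, 401)              # Faraday depth axis [rad m^-2]
F = np.exp(-0.5 * ((phi - 10.0) / 5.0) ** 2)   # toy polarised Faraday spectrum
dphi = phi[1] - phi[0]

m0 = (F * dphi).sum()                          # zeroth moment: total polarised signal
m1 = (phi * F * dphi).sum() / m0               # first moment: mean Faraday depth
m2 = ((phi - m1) ** 2 * F * dphi).sum() / m0   # second moment: spread about the mean

# The toy spectrum is a Gaussian at depth 10 with width 5, so the
# first moment recovers 10 and the second moment recovers 5**2 = 25.
assert abs(m1 - 10.0) < 1e-3 and abs(m2 - 25.0) < 0.01
```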

    The contribution of discrete sources to the sky temperature at 144 MHz

    This paper is part of the first data release of the LoTSS Deep Fields. © 2020 The European Southern Observatory (ESO). In recent years, the level of the extragalactic radio background has become a point of considerable interest, with some lines of argument pointing to an entirely new cosmological synchrotron background. The contribution of the known discrete source population to the sky temperature is key to this discussion. Because of the steep spectral index of the excess over the cosmic microwave background, it is best studied at low frequencies, where the signal is strongest. The Low-Frequency Array (LOFAR) wide and deep sky surveys give us the best constraints yet on the contribution of discrete extragalactic sources at 144 MHz, and in particular allow us to include contributions from diffuse, low-surface-brightness emission that could not be fully accounted for in previous work. We show that, even with these new data, known sources can still only account for around a quarter of the estimated extragalactic sky temperature at LOFAR frequencies.
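The link between a discrete-source contribution and "sky temperature" is the standard Rayleigh-Jeans relation T_b = c^2 S_Ω / (2 k_B ν^2), where S_Ω is the integrated flux density per unit solid angle. The relation is standard; the 5000 Jy/sr figure below is a made-up number for illustration, not the paper's measurement.

```python
# Physical constants (SI).
c = 2.998e8          # speed of light, m/s
k_B = 1.381e-23      # Boltzmann constant, J/K
nu = 144e6           # observing frequency, Hz
jy = 1e-26           # 1 Jy in W m^-2 Hz^-1

# Hypothetical integrated source contribution: 5000 Jy per steradian.
S_per_sr = 5000 * jy

# Rayleigh-Jeans brightness temperature of that contribution.
T_b = c**2 * S_per_sr / (2 * k_B * nu**2)
print(f"T_b = {T_b:.2f} K")   # prints "T_b = 7.85 K"
```

The steep spectral index mentioned above is why this is measured at low frequency: the nu^-2 factor (plus the sources' own falling spectra) makes T_b much larger at 144 MHz than in the GHz regime.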

    Context-sensitive multivariant assertion checking in modular programs

    We propose a modular, assertion-based system for verification and debugging of large logic programs, together with several interesting models for checking assertions statically in modular programs, each with different characteristics and representing different trade-offs. Our proposal is a modular and multivariant extension of our previously proposed abstract assertion checking model, and we also report on its implementation in the CiaoPP system. In our approach, the specification of the program, given by a set of assertions, may be partial, instead of the complete specification required by traditional verification systems. Also, the system can deal with properties which cannot always be determined at compile time. As a result, the proposed system needs to work with safe approximations: all assertions proved correct are guaranteed to be valid, and all errors flagged are actual errors. The use of modular, context-sensitive static analyzers also allows us to introduce a new distinction between assertions checked in a particular context and assertions checked in general.
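The safe-approximation discipline described above can be sketched as a partial verdict. The checking logic below is invented for illustration: because static analysis over-approximates the set of reachable states, an assertion that holds on every abstract state is definitely valid, while a failure on some abstract state alone proves nothing and is left for run-time checking (proving a definite error would need extra, definite information, omitted here for brevity).

```python
from enum import Enum

class Status(Enum):
    CHECKED = "proved to hold"
    UNKNOWN = "left for run-time checking"

def check_assertion(prop, over_approx_states):
    """over_approx_states is a superset of the truly reachable states."""
    if all(prop(s) for s in over_approx_states):
        return Status.CHECKED   # holds on a superset => holds for real
    return Status.UNKNOWN       # a failure on an over-approximation alone
                                # does not prove a real error

reachable_approx = range(0, 10, 2)   # analysis says: only even values occur
assert check_assertion(lambda s: s % 2 == 0, reachable_approx) is Status.CHECKED
assert check_assertion(lambda s: s < 5, reachable_approx) is Status.UNKNOWN
```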